Nathan discusses his son Ernie's cancer treatment progress, provides an in-depth analysis of the current AI landscape by examining the strengths and potential weaknesses of Google DeepMind, OpenAI, Anthropic, and xAI, and shares his thoughts on model performance, technological advancements, and the companies' strategies in the AI race.
Demis Hassabis discusses Google DeepMind's path to artificial general intelligence, exploring the challenges of building AI systems with reasoning, creativity, and consistent behavior across cognitive tasks, while also highlighting potential breakthroughs in science, health, and technology.
Adam Marblestone explores how the brain learns efficiently through complex reward functions and omnidirectional inference, discussing potential insights for AI development from neuroscience and the importance of understanding the brain's learning mechanisms.
Emmett Shear and Séb Krier explore the flaws in current AI alignment approaches, arguing for a more organic, process-oriented method that treats AI as potential beings with evolving goals and the capacity for care, rather than mere tools to be controlled.
An in-depth exploration of the critical AI security crisis, revealing how current AI systems are vulnerable to prompt injection and jailbreaking attacks, and why existing guardrails are ineffective as AI agents gain more power to take real-world actions.
A live podcast episode featuring conversations with Alex Bores, Dean Ball, and Peter Wildeford exploring AI developments, policy challenges, and forecasts for 2026, covering topics like the RAISE Act, chip sales to China, AI agent capabilities, and potential technological paradigm shifts.
In this episode of Moonshots, Peter Diamandis and his guests share ten bold predictions for 2026, ranging from space races and AI solving mathematical problems to Level 5 autonomous robots and breakthrough epigenetic reprogramming, highlighting the exponential technological changes expected in the coming year.
A year-end live show featuring nine rapid-fire conversations exploring AI's landscape in 2025-2026, with discussions ranging from AI safety and technological unemployment to scientific research, continual learning architectures, and the evolving capabilities of frontier AI models.
In this episode, Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind, discusses the landmark model's development, exploring the shift from "infinite data" to a data-limited regime, the importance of research taste, and the evolving landscape of AI pre-training and model capabilities.
Sebastian Seung discusses his groundbreaking work on the fly connectome and his new startup Memazing, which aims to create digital brain emulations by mapping and simulating neural connections, with the ultimate goal of understanding intelligence and potentially transcending biological constraints.
In this episode, Marek Kozlowski discusses Poland's sovereign AI strategy with Project PLUM, focusing on creating small, locally-adapted language models that preserve Polish cultural nuances, offer cost advantages, and provide on-premise solutions for businesses and government sectors.
A deep dive into Z.ai's innovative AI development culture, exploring their approach to model training, global branding, multilingual capabilities, and the unique challenges and opportunities in the Chinese AI landscape.
A groundbreaking startup called Hertha Metals is revolutionizing steel production in the United States by developing a cleaner, more cost-effective process that uses natural gas instead of coal, potentially reducing emissions by 50% and production costs by up to 30%.
In this episode, Pablos Holman, a hacker and inventor, discusses his journey through technology, from early computer hacking to working with Blue Origin and Intellectual Ventures, and shares his vision for deep tech innovation that can solve big global problems.
Emmett Shear challenges the current AI alignment paradigm, proposing "organic alignment" that focuses on teaching AI systems to genuinely care about humans through multi-agent simulations, emphasizing alignment as an ongoing process of learning and growth rather than a fixed set of controls.
Henrik Werdelin discusses how AI is democratizing entrepreneurship, enabling a new model of "portfolio entrepreneurship" in which founders deploy multiple AI agents to serve a specific customer group and solve its problems.
A wide-ranging exploration of a potential positive AI future, covering transformative applications from self-driving cars and personalized tutoring to radically improved health, while balancing excitement for technological progress with thoughtful consideration of potential risks.
Google's Nano Banana image model achieves breakthrough character consistency by leveraging Gemini's multimodal capabilities, high-quality data, and human evaluation, enabling users to see themselves in AI-generated worlds through intuitive and personalized visual creation.
In this episode, Anthropic's Cat Wu and Boris Cherny discuss the creation and evolution of Claude Code, a revolutionary CLI-based AI coding tool that transforms engineering workflows through its innovative agent architecture and extensible design.
In this episode, Google DeepMind developers discuss the creation of Nano Banana, a groundbreaking image generation model that allows for personalized, conversational image editing with unprecedented character consistency and creative potential.
Julian Schrittwieser from Anthropic discusses the exponential trajectory of AI capabilities, predicting that models will achieve full-day autonomous task completion by 2026 and expert-level performance across many professions by 2027, while exploring how pre-training combined with reinforcement learning enables AI agents to make novel scientific discoveries and potentially earn Nobel Prizes.
A discussion with Liam Fedus and Ekin Dogus Cubuk about founding Periodic Labs, an AI research company aimed at accelerating scientific discovery by training AI systems to conduct physics and chemistry experiments through real-world feedback and iteration.
Liam Fedus and Ekin Dogus Cubuk discuss their startup Periodic Labs, which aims to train AI systems to accelerate scientific discovery by using physical experiments as a reinforcement learning signal, with a focus on discovering high-temperature superconductors.
A wide-ranging discussion with Far.AI CEO Adam Gleave exploring AI safety, potential post-AGI futures, alignment strategies, and the organization's approach to developing technical and policy solutions across the entire AI safety ecosystem.
A technical journey through the evolution of generative media, focusing on FAL's strategic pivot to specialize in optimizing image and video model inference, scaling from a few developers to serving over 2 million developers with 350 unique models across image, video, and audio generation.
Sir David Spiegelhalter shares insights on statistics, uncertainty, and the importance of communicating evidence transparently, drawing on his experience in medical research, public health crises, and scientific communication. He emphasizes resilience, taking calculated risks, and approaching complex challenges with an open mind that acknowledges limitations and seeks out diverse perspectives.